Elon Musk is making a big bet on his future vision – will it work?

New Scientist

Reports suggest that Elon Musk is eyeing up a merger involving SpaceX, Tesla and xAI, but what does he hope to achieve by consolidating his business empire? Elon Musk is a busy man, heading up multiple billion-dollar companies. While he is increasingly a divisive figure, there is no doubt that Tesla and SpaceX, his two most important ventures, have done much to advance the future of electric cars and spacecraft, respectively. But a series of corporate moves this week suggests Musk has a new vision of the future - and he may be combining all his companies to get there.


Elon Musk's xAI announces it has raised $20bn amid backlash over Grok deepfakes

The Guardian

AI company's chatbot faces criticism over its generation of sexualized, nonconsensual images of women and girls. Elon Musk's artificial intelligence company has raised $20bn in its latest funding round, the startup announced Tuesday, even as its marquee chatbot Grok faces backlash over generating sexualized, nonconsensual images of women and underage girls. The funding round exceeded its initial $15bn target, according to xAI's press release. The company touted Grok's image-generation abilities in the announcement. Despite the criticism, the company has continued to win government contracts and billions of dollars in investment amid the AI boom. Over the past week, Grok has responded to tens of thousands of prompts from users on X requesting that the chatbot remove women's clothing in images or pose them in sexualized ways.


Elon Musk's 2025 recap: how the world's richest person became its most chaotic

The Guardian

Though the drama surrounding Elon Musk was frequently absurd and unpredictable, it was also consequential. How the tech CEO and 'Dogefather' made a mess of the year - from an apparent Nazi salute during his White House tenure to Tesla sales slumps and Starship explosions. The year 2025 was dizzying for Elon Musk. The tech titan began the year holding court with Donald Trump in Washington DC. As the months ticked by, one public appearance after another baffled the US and the world.


When Privacy Isn't Synthetic: Hidden Data Leakage in Generative AI Models

Mustaqim, S. M., Kotal, Anantaa, Yi, Paul H.

arXiv.org Artificial Intelligence

Generative models are increasingly used to produce privacy-preserving synthetic data as a safe alternative to sharing sensitive training datasets. However, we demonstrate that such synthetic releases can still leak information about the underlying training samples through structural overlap in the data manifold. We propose a black-box membership inference attack that exploits this vulnerability without requiring access to model internals or real data. The attacker repeatedly queries the generative model to obtain large numbers of synthetic samples, performs unsupervised clustering to identify dense regions of the synthetic distribution, and then analyzes cluster medoids and neighborhoods that correspond to high-density regions in the original training data. These neighborhoods act as proxies for training samples, enabling the adversary to infer membership or reconstruct approximate records. Our experiments across healthcare, finance, and other sensitive domains show that cluster overlap between real and synthetic data leads to measurable membership leakage, even when the generator is trained with differential privacy or other noise mechanisms. The results highlight an under-explored attack surface in synthetic data generation pipelines and call for stronger privacy guarantees that account for distributional neighborhood inference rather than sample-level memorization alone, underscoring its role in privacy-preserving data publishing. Implementation and evaluation code are publicly available at: github.com/Cluster-Medoid-Leakage-Attack.
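The attack loop the abstract describes (query, cluster, take medoids, score candidates by distance) can be sketched in a few lines. This is a minimal illustration with plain k-means as a stand-in clusterer; the function names and parameters are illustrative, not the authors' implementation:

```python
import numpy as np

def kmeans(samples, k, iters=20, seed=0):
    # Plain k-means on the synthetic samples (stand-in for any clusterer).
    rng = np.random.default_rng(seed)
    centers = samples[rng.choice(len(samples), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((samples[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = samples[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels, centers

def cluster_medoids(samples, labels, centers):
    # Medoid = the actual synthetic sample closest to each cluster center.
    medoids = []
    for j, c in enumerate(centers):
        pts = samples[labels == j]
        if len(pts):
            medoids.append(pts[np.argmin(((pts - c) ** 2).sum(-1))])
    return np.array(medoids)

def membership_score(record, medoids):
    # A smaller distance to the nearest medoid is a stronger membership signal.
    return -np.min(((medoids - record) ** 2).sum(-1))
```

A candidate record that falls inside a dense synthetic cluster then scores higher than one far from every medoid, which is the leakage signal the paper measures.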


Description of Corner Cases in Automated Driving: Goals and Challenges

Bogdoll, Daniel, Breitenstein, Jasmin, Heidecker, Florian, Bieshaar, Maarten, Sick, Bernhard, Fingscheidt, Tim, Zöllner, J. Marius

arXiv.org Artificial Intelligence

Scaling the distribution of automated vehicles requires handling various unexpected and possibly dangerous situations, termed corner cases (CC). Since many modules of automated driving systems are based on machine learning (ML), CC are an essential part of the data for their development. However, there is only a limited amount of CC data in large-scale data collections, which makes them challenging in the context of ML. With a better understanding of CC, offline applications, e.g., dataset analysis, and online methods, e.g., improved performance of automated driving systems, can be improved. While there are knowledge-based descriptions and taxonomies for CC, there is little research on machine-interpretable descriptions. In this extended abstract, we will give a brief overview of the challenges and goals of such a description.


Readability Measures and Automatic Text Simplification: In the Search of a Construct

Cardon, Rémi, Doğruöz, A. Seza

arXiv.org Artificial Intelligence

Readability is a key concept in the current era of abundant written information. To help make texts more readable and information more accessible to everyone, a line of research aims at making texts accessible to their target audience: automatic text simplification (ATS). Lately, there have been studies on the correlations between automatic evaluation metrics in ATS and human judgment. However, the correlations between those two aspects and commonly available readability measures (such as readability formulas or linguistic features) have not been the focus of as much attention. In this work, we investigate the place of readability measures in ATS by complementing the existing studies on evaluation metrics and human judgment, focusing on English. We first discuss the relationship between ATS and research in readability, then we report a study on correlations between readability measures and human judgment, and between readability measures and ATS evaluation metrics. We identify that in general, readability measures do not correlate well with automatic metrics and human judgment. We argue that as the three different angles from which simplification can be assessed tend to exhibit rather low correlations with one another, there is a need for a clear definition of the construct in ATS.
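As a concrete example of the readability formulas the abstract refers to, the classic Flesch Reading Ease score combines average sentence length with average syllables per word. The sketch below uses a crude vowel-group heuristic for syllables; real tools rely on pronunciation dictionaries, so treat this as an approximation, not a reference implementation:

```python
import re

def count_syllables(word):
    # Rough heuristic: count vowel groups, discounting a trailing silent 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    # Flesch formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))
```

Higher scores indicate easier text; the paper's point is that such surface formulas need not track either human simplicity judgments or ATS metrics.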


Evaluating LLM-Contaminated Crowdsourcing Data Without Ground Truth

Zhang, Yichi, Pang, Jinlong, Zhu, Zhaowei, Liu, Yang

arXiv.org Artificial Intelligence

The recent success of generative AI highlights the crucial role of high-quality human feedback in building trustworthy AI systems. However, the increasing use of large language models (LLMs) by crowdsourcing workers poses a significant challenge: datasets intended to reflect human input may be compromised by LLM-generated responses. Existing LLM detection approaches often rely on high-dimensional training data such as text, making them unsuitable for annotation tasks like multiple-choice labeling. In this work, we investigate the potential of peer prediction -- a mechanism that evaluates the information within workers' responses without using ground truth -- to mitigate LLM-assisted cheating in crowdsourcing with a focus on annotation tasks. Our approach quantifies the correlations between worker answers while conditioning on (a subset of) LLM-generated labels available to the requester. Building on prior research, we propose a training-free scoring mechanism with theoretical guarantees under a crowdsourcing model that accounts for LLM collusion. We establish conditions under which our method is effective and empirically demonstrate its robustness in detecting low-effort cheating on real-world crowdsourcing datasets.
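The core conditioning idea, scoring agreement between peers only within strata fixed by the LLM's own labels, can be sketched as follows. This is a simplified chance-corrected agreement score for illustration, not the paper's mechanism, and the names are hypothetical:

```python
from collections import Counter

def agreement_score(worker_a, worker_b):
    # Empirical agreement minus the chance agreement implied by each
    # worker's label marginals (a simple chance-corrected score).
    n = len(worker_a)
    agree = sum(a == b for a, b in zip(worker_a, worker_b)) / n
    ca, cb = Counter(worker_a), Counter(worker_b)
    chance = sum(ca[label] * cb.get(label, 0) for label in ca) / n ** 2
    return agree - chance

def conditioned_score(worker_a, worker_b, llm_labels):
    # Score agreement separately within each stratum of the LLM label:
    # a worker who merely copies the LLM gains nothing once we condition on it.
    scores = []
    for s in set(llm_labels):
        idx = [i for i, label in enumerate(llm_labels) if label == s]
        if len(idx) > 1:
            scores.append(agreement_score([worker_a[i] for i in idx],
                                          [worker_b[i] for i in idx]))
    return sum(scores) / len(scores) if scores else 0.0
```

Workers who paste LLM outputs agree perfectly with each other but carry no information beyond the conditioning labels, so their conditioned score collapses toward zero, while genuinely informative workers retain positive correlation.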